- Monday, September 2, 2024
Nearly 200 Google DeepMind employees signed a letter urging Google to terminate its military contracts, arguing that they violate the company's own AI ethics principles. DeepMind technology has been bundled into Google Cloud and sold to militaries, creating internal conflict with AI staff who value ethical standards. Google responded that it adheres to its AI Principles, but workers remain unsatisfied and are seeking stronger governance against military use of their AI.
- Thursday, June 6, 2024
A group of current and former OpenAI employees, along with two from Google DeepMind, has published an open letter criticizing OpenAI's prioritization of profits and growth over AI safety. The whistleblowers claim that OpenAI has used restrictive agreements to silence employees' concerns. While some see the letter as evidence that AI safety concerns are valid, others criticize it for lacking concrete details and focusing on hypothetical risks.
- Wednesday, April 10, 2024
DeepMind co-founder Demis Hassabis now leads Google's unified AI research arm, aiming to maintain the tech giant's edge in the AI landscape with breakthroughs like AlphaGo and AlphaFold. Despite those successes, challenges persist in turning the research into tangible products and in competing with rivals such as OpenAI's ChatGPT. Hassabis, recognized for his significant contributions to AI, must now navigate Google's product strategy to leverage DeepMind's research advancements.
- Wednesday, June 5, 2024
A group of current and former employees at advanced AI companies is calling for those companies to commit to principles that ensure transparency and protect employees who raise risk-related concerns. They highlight the need for companies to avoid enforcing non-disparagement agreements, facilitate anonymous reporting processes, support open criticism, and prevent retaliation against whistleblowers.
- Tuesday, July 2, 2024
Tech companies like Google and Meta are updating their privacy policies to use public and possibly private user data to train AI systems as they navigate a complex landscape of privacy laws and user consent. Backlash has followed, as users and creators fear their content will be used to train AI that could eventually replace them. The tensions highlight emerging challenges in AI development, data privacy rights, and the balance between innovation and ethics in the tech industry.
- Thursday, May 2, 2024
In mid-June 2019, Kevin Scott, Microsoft's chief technology officer, sent an email with the subject line 'Thoughts on OpenAI' to co-founder Bill Gates and CEO Satya Nadella, detailing his concerns about Google's developments in AI and his worry that it might take Microsoft multiple years to even attempt to compete. Microsoft had tried to keep the email hidden, but it was made public as part of the US Justice Department's antitrust trial over Google's alleged search monopoly. Mere weeks after the email, Microsoft invested $1 billion in OpenAI, and it has increased its investment significantly since.
- Monday, September 9, 2024
OpenAI is restructuring its management and organization to attract major investors like Microsoft, Apple, and Nvidia while aiming for a $100 billion valuation. The company faces internal conflicts about its mission and safety practices, leading to significant staff turnover, including key researchers joining rivals like Anthropic. Despite growing revenues and user base, OpenAI grapples with balancing profit motives and ethical concerns in advancing AI technologies.
- Monday, August 26, 2024
Google DeepMind's AGI Safety & Alignment team shared a detailed update on their work focused on existential risk from AI. Key areas include amplified oversight, frontier safety, and mechanistic interpretability, with ongoing efforts to refine their approach to technical AGI safety. They highlighted recent achievements, collaborations, and plans to address emerging challenges.
- Thursday, May 23, 2024
Ilya Sutskever has departed from OpenAI amidst concerns over the company's commitment to AI safety, signaling a potentially worrying trend as three other key personnel have also recently resigned. These departures raise questions about the impact on the company's safety-focused mission and its nonprofit status as it pursues commercialization. These events may also reverberate through legal and regulatory landscapes, prompting scrutiny from stakeholders in Washington.
- Monday, March 4, 2024
Google's recent AI debacle with its Gemini image and text generation tool has sparked an internal crisis, with plummeting morale and calls for CEO Sundar Pichai's resignation. While the public focuses on potential bias in the tool, the root cause lies in organizational chaos: unclear ownership of Gemini's development and a lack of accountability within Google teams. This dysfunction led to hasty decision-making and insufficient coordination, resulting in the tool's problematic outputs.
- Thursday, May 2, 2024
Google has laid off at least 200 employees from its Core teams. The reorganization will involve moving some roles to India and Mexico. The Core unit is responsible for building the technical foundation behind Google's flagship products and for protecting users' online safety. The Core layoffs also include the governance and protected data group, which will be at the center of regulatory challenges facing the company. Affected employees have access to outplacement services and will be able to apply for open roles within Google.
- Monday, April 15, 2024
OpenAI has reportedly fired two researchers allegedly linked to the leaking of company secrets, following months of leaks and company efforts to crack down on such incidents.
- Wednesday, May 22, 2024
The departures of Ilya Sutskever and Jan Leike have brought to light OpenAI's restrictive NDA, which prevents former employees from criticizing OpenAI under threat of losing their vested equity. CEO Sam Altman responded to the story by promising a fix.
- Friday, May 3, 2024
Mustafa Suleyman, co-founder of DeepMind, has been named the chief executive of Microsoft AI. He will contribute to Microsoft's expansion into AI consumer products, while his former colleague and DeepMind's other co-founder, Demis Hassabis, will lead AI research at Google. The journey of these two influential figures reflects the personal and competitive undercurrents driving the race to develop the next major computing platform.
- Wednesday, July 17, 2024
Several major AI companies, including Anthropic, Nvidia, Apple, and Salesforce, used subtitles from 173,536 YouTube videos across 48,000 channels to train AI models, despite YouTube's rules against unauthorized data harvesting. This has sparked backlash from content creators, who argue that their work has been exploited without consent or compensation, raising concerns about AI's impact on the creative industry and the ethics of using such training data.
- Thursday, May 9, 2024
This is an internal Microsoft email from 2019 that reveals concerns about falling behind Google in AI and the motivation for partnering with OpenAI. Microsoft watched Google Search and other AI integrations improve while it struggled to replicate the same success. Google's BERT-based models were ahead of Microsoft's not just because of talent, but also because Google had ample AI infrastructure already in place.
- Wednesday, July 17, 2024
Whistleblowers have accused OpenAI of placing illegal restrictions on employee communication with government regulators, according to a letter obtained by The Washington Post. The letter urges the SEC to investigate OpenAI's severance, non-disparagement, and non-disclosure agreements for violating employees' rights to whistleblower incentives and compensation.
- Monday, August 19, 2024
Eric Schmidt, the former CEO of Google, recently gave a talk at Stanford. This is the transcript of the talk, in which he discusses the rapid advancements in AI and their potential impact on the world. He predicts that within the next year or two, significant changes will arise from the integration of three key AI advancements: large context windows, AI agents, and text-to-action capabilities. He also made some controversial remarks about OpenAI and Google's work culture, which led to the talk being taken down.
- Thursday, May 23, 2024
OpenAI has had a provision in place since 2019 under which departing employees risk losing their vested equity if they criticize the company. Employees who refuse to cooperate also risk being locked out of future tender offers, preventing them from selling their stock. A company threatening to claw back already-vested equity is unusual, and OpenAI's past behavior towards former employees suggests that the executive team knew about the provision.
- Tuesday, August 13, 2024
OpenAI's founding team is experiencing significant turnover, with only 2 of the 11 original members currently active, as concerns grow over the organization's shift away from its initial non-profit ideals toward a more profit-driven structure. This exodus includes co-founders Greg Brockman (on sabbatical) and Ilya Sutskever (who has left), amid speculation of burnout and lucrative secondary financial rewards. The organization faces challenges as it may require a new major cash partner and anticipates delays in the release of GPT-5, while the industry considers the merits of "open" versus "closed" AI models.
- Monday, July 22, 2024
Former Google DeepMind scientists at Artificial Agency have unveiled an AI behavior engine for dynamic NPCs in video games and raised $16 million to enhance in-game interactions. They are working with renowned AAA studios and expect widespread adoption by 2025 despite potential cost implications. The engine produces more realistic and responsive game characters without predefined scripts.
- Wednesday, April 17, 2024
ALOHA Unleashed is Google DeepMind's new generation of AI-powered robots with impressive dexterity. DeepMind recently released a series of videos showing the robots hanging shirts, precisely inserting gears, and even tying shoelaces. The robots can generalize to objects they were not trained on. The videos are available at the link.
- Tuesday, September 10, 2024
Google's AI Overviews, powered by the Gemini language model, faced heavy criticism for inaccuracies and dangerous suggestions after its U.S. launch. Despite the backlash, Google expanded the feature to six more countries, raising concerns among publishers about reduced traffic and misrepresented content. AI strategists and SEO experts emphasize the need for transparency and better citation practices to maintain trust and traffic.
- Monday, May 27, 2024
There are many examples on social media of Google's new AI Overviews product producing bizarre answers. Google is now racing to manually disable AI Overviews for specific searches. The company has been testing AI Overviews for a year, serving over a billion queries in that time. It says the examples seen on social media involve uncommon queries, and that some were doctored or could not be reproduced.
- Friday, April 19, 2024
Mathematicians and Google DeepMind researchers have used AI to find large collections of objects that lack specific patterns, work that helps in understanding potential catastrophic failures such as the internet being severed by server outages. Their approach employs large language models to iteratively generate and refine pattern-free collections, facilitating the study of worst-case scenarios. This research reflects the combined power of AI and human ingenuity in tackling complex problems.
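  The article describes the search method only at a high level. Below is a minimal, hypothetical sketch of an iterative generate-and-refine loop of that general shape, using a toy objective (a set of integers with no three-term arithmetic progression) and a random mutator standing in for the language model; the function names and acceptance rule are illustrative assumptions, not DeepMind's actual code.

  ```python
  import random

  def has_three_term_ap(s: set[int]) -> bool:
      """Return True if the set contains any 3-term arithmetic progression."""
      elems = sorted(s)
      for i, a in enumerate(elems):
          for b in elems[i + 1:]:
              if 2 * b - a in s:  # a, b, 2b - a would form a progression
                  return True
      return False

  def propose_candidate(best: set[int], n: int, rng: random.Random) -> set[int]:
      """Stand-in for the generative step.

      In the reported approach, a large language model proposes the next
      candidate construction; here a random add/remove mutation of the
      current best collection plays that role.
      """
      candidate = set(best)
      if candidate and rng.random() < 0.3:
          candidate.discard(rng.choice(sorted(candidate)))
      candidate.add(rng.randrange(n))
      return candidate

  def search(n: int = 50, steps: int = 5000, seed: int = 0) -> set[int]:
      """Iteratively generate, check, and keep the largest pattern-free collection."""
      rng = random.Random(seed)
      best: set[int] = set()
      for _ in range(steps):
          candidate = propose_candidate(best, n, rng)
          if not has_three_term_ap(candidate) and len(candidate) >= len(best):
              best = candidate  # accept equal-size swaps to escape local optima
      return best

  if __name__ == "__main__":
      result = search()
      print(f"Progression-free set of size {len(result)}: {sorted(result)}")
  ```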
- Monday, August 12, 2024
Intel reportedly passed on acquiring a stake in OpenAI, now a significant player in AI, during 2017-2018, owing to then-CEO Bob Swan's skepticism that AI was ready for the market.
- Friday, August 23, 2024
This post is a long in-depth summary of what DeepMind is working on in its AGI safety and alignment research efforts.
- Monday, May 27, 2024
Google's AI Overview product has been generating bizarre responses, prompting the company to manually disable the feature for specific searches. Errors include suggestions to eat glue or rocks. Google maintains that its AI outputs high-quality information, though it acknowledges some errors and is working on improvements.
- Friday, August 30, 2024
OpenAI and Anthropic have agreed to allow the US government early access to their major new AI models before public release to enhance safety evaluations as part of a memorandum with the US AI Safety Institute.